With the growing availability of data across scientific domains, generative models hold enormous potential to accelerate scientific discovery at every step of the scientific method. Perhaps their most valuable application lies in the traditionally slowest and most challenging step: coming up with hypotheses. Powerful representations are now being learned from large volumes of data to generate novel hypotheses, with a major impact on scientific discovery applications ranging from material design to drug discovery. GT4SD (https://github.com/gt4sd/gt4sd-core) is an extensible open-source library that enables scientists, developers, and researchers to train and use state-of-the-art generative models for hypothesis generation in scientific discovery. GT4SD supports a variety of uses of generative models across material science and drug discovery, including molecule discovery and design conditioned on properties such as target proteins, omic profiles, scaffold distances, binding energies, and more.
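To make the intended workflow concrete, here is a minimal usage sketch modeled on the examples in the GT4SD README; the class names and the `sample` signature are recalled from the project documentation and may differ across library versions, so treat them as assumptions:

```python
# Hedged sketch of GT4SD usage (names recalled from the project README, may
# have changed between versions): instantiate a registered generative
# algorithm and sample candidate molecules conditioned on a protein target.
from gt4sd.algorithms.conditional_generation.paccmann_rl.core import (
    PaccMannRL,
    PaccMannRLProteinBasedGenerator,
)

# condition generation on a (truncated, illustrative) protein sequence
target = "MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSF"
algorithm = PaccMannRL(configuration=PaccMannRLProteinBasedGenerator(), target=target)
molecules = list(algorithm.sample(10))  # ten candidate SMILES strings
print(molecules)
```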
A few-shot generative model should be able to generate data from a novel distribution after observing only a limited set of examples. In few-shot learning, the model is trained on data from many sets drawn from distributions that share some underlying properties, such as sets of characters from different alphabets or objects from different categories. We extend current latent-variable models for sets to a fully hierarchical approach with attention-based point-to-set-level aggregation, and call our method SCHA-VAE, for Set-Context-Hierarchical-Aggregation Variational Autoencoder. We explore likelihood-based model comparison, iterative data sampling, and adaptation-free out-of-distribution generalization. Our results show that the hierarchical formulation better captures the intrinsic variability within sets in the small-data regime. This work generalizes deep latent-variable approaches to few-shot learning, taking a step toward large-scale few-shot generation.
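A minimal sketch of the core idea follows: a set-level context latent aggregated by attention, with per-sample latents conditioned on it. It compresses the full SCHA-VAE hierarchy into a single latent layer with illustrative sizes and is not the authors' code:

```python
# Minimal sketch of one set-context VAE layer (illustrative, not SCHA-VAE's
# full hierarchy): attention pools the set into a context latent c, and
# per-sample latents z are inferred conditionally on c.
import torch
import torch.nn as nn

class SetContextVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=128, z_dim=32, c_dim=32):
        super().__init__()
        self.embed = nn.Linear(x_dim, h_dim)
        self.attn = nn.MultiheadAttention(h_dim, num_heads=4, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, h_dim))  # learned pooling query
        self.to_c = nn.Linear(h_dim, 2 * c_dim)               # context posterior params
        self.to_z = nn.Linear(h_dim + c_dim, 2 * z_dim)       # sample posterior params
        self.decode = nn.Sequential(nn.Linear(z_dim + c_dim, h_dim),
                                    nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x_set):                   # x_set: (batch, set_size, x_dim)
        h = torch.relu(self.embed(x_set))
        q = self.query.expand(x_set.size(0), -1, -1)
        pooled, _ = self.attn(q, h, h)          # attention-based set aggregation
        c_mu, c_logvar = self.to_c(pooled).chunk(2, dim=-1)
        c = c_mu + torch.randn_like(c_mu) * (0.5 * c_logvar).exp()
        c_rep = c.expand(-1, x_set.size(1), -1)
        z_mu, z_logvar = self.to_z(torch.cat([h, c_rep], -1)).chunk(2, dim=-1)
        z = z_mu + torch.randn_like(z_mu) * (0.5 * z_logvar).exp()
        x_rec = self.decode(torch.cat([z, c_rep], -1))
        recon = ((x_rec - x_set) ** 2).mean()
        kl = lambda mu, lv: -0.5 * (1 + lv - mu.pow(2) - lv.exp()).mean()
        return recon + kl(c_mu, c_logvar) + kl(z_mu, z_logvar)  # ELBO-style loss
```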
Accurate uncertainty quantification is necessary to enhance the reliability of deep learning models in real-world applications. In the case of regression tasks, prediction intervals (PIs) should be provided along with the deterministic predictions of deep learning models. Such PIs are useful or "high-quality" as long as they are sufficiently narrow and capture most of the probability density. In this paper, we present a method to learn prediction intervals for regression-based neural networks automatically, in addition to the conventional target predictions. In particular, we train two companion neural networks: one with a single output, the target estimate, and another with two outputs, the upper and lower bounds of the corresponding PI. Our main contribution is the design of a loss function for the PI-generation network that takes into account the output of the target-estimation network and has two optimization objectives: minimizing the mean prediction interval width and ensuring PI integrity using constraints that implicitly maximize the prediction interval probability coverage. Both objectives are balanced within the loss function using a self-adaptive coefficient. Furthermore, we apply a Monte Carlo-based approach that evaluates the model uncertainty in the learned PIs. Experiments using a synthetic dataset, six benchmark datasets, and a real-world crop yield prediction dataset showed that our method was able to maintain nominal probability coverage and produce narrower PIs, without detriment to target estimation accuracy, compared to the PIs generated by three state-of-the-art neural-network-based methods.
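A hedged sketch of such a loss is given below; the soft coverage indicator, the integrity penalty, and the `eta` update rule are plausible stand-ins for the paper's formulation, not its exact definitions:

```python
# Hedged sketch of a PI loss in the spirit described above (not the paper's
# exact formulation): penalize mean PI width, softly enforce coverage, and
# require the bounds to straddle the frozen target network's estimate. The
# balancing coefficient `beta` adapts itself toward nominal coverage.
import torch

def pi_loss(y_low, y_up, y_true, y_pred, beta, target_coverage=0.95, eta=0.01):
    width = (y_up - y_low).mean()                       # mean PI width objective
    # soft indicator that the interval covers the label (sigmoid relaxation)
    s = 50.0
    covered = torch.sigmoid(s * (y_true - y_low)) * torch.sigmoid(s * (y_up - y_true))
    coverage = covered.mean()
    # integrity: bounds should enclose the companion network's point estimate
    integrity = torch.relu(y_low - y_pred).mean() + torch.relu(y_pred - y_up).mean()
    loss = width + beta * ((target_coverage - coverage).clamp(min=0) + integrity)
    # self-adaptive coefficient: grow beta while coverage is below nominal
    beta_new = max(0.0, beta + eta * float(target_coverage - coverage.detach()))
    return loss, beta_new
```

In a training loop, the target-estimation network would be trained first and frozen, after which each batch calls `pi_loss` and carries `beta_new` forward to the next step.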
A quantitative assessment of the global importance of an agent in a team is as valuable as gold for strategists, decision-makers, and sports coaches. Yet, retrieving this information is not trivial, since in a cooperative task it is hard to isolate the performance of an individual from that of the whole team. Moreover, the relationship between an agent's role and its personal attributes is not always clear. In this work we conceive an application of Shapley analysis for studying the contribution of both agent policies and attributes, putting them on an equal footing. Since computing Shapley values in a transferable-utility coalitional game is NP-hard and scales exponentially with the number of participants, we resort to exploiting a priori knowledge about the rules of the game to constrain the relations between the participants over a graph. We hence propose a method to determine a Hierarchical Knowledge Graph of agents' policies and features in a Multi-Agent System. Assuming a simulator of the system is available, the graph structure allows us to exploit dynamic programming to assess the importances much faster. We test the proposed approach in a proof-of-concept environment deploying both hardcoded policies and policies obtained via Deep Reinforcement Learning. The proposed paradigm is less computationally demanding than naively computing the Shapley values and provides great insight not only into the importance of an agent in a team but also into the attributes needed to deploy the policy at its best.
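For contrast with the proposed graph-based speedup, the brute-force Shapley computation that the paper avoids can be written directly; here `value` stands for a user-supplied characteristic function, e.g., the average team reward returned by a simulator:

```python
# Exact Shapley values by enumerating all coalitions, making concrete the
# exponential cost the paper's dynamic programming avoids. `value` is a
# hypothetical characteristic function supplied by the user.
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for coalition in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = value(set(coalition) | {p}) - value(set(coalition))
                phi[p] += weight * marginal
    return phi

# toy example: agents 'a' and 'b' are complementary, 'c' adds a fixed bonus
v = lambda S: (3.0 if {'a', 'b'} <= S else 0.0) + (1.0 if 'c' in S else 0.0)
print(shapley_values(['a', 'b', 'c'], v))   # a and b each get 1.5, c gets 1.0
```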
In recent years there has been growing attention to interpretable machine learning models, which can provide explanatory insights into their behavior. Thanks to their interpretability, decision trees have been intensively studied for classification tasks, and, due to the remarkable advances in mixed-integer programming (MIP), various approaches have been proposed to formulate the problem of training an Optimal Classification Tree (OCT) as a MIP model. We present a novel mixed-integer quadratic formulation for the OCT problem, which exploits the generalization capabilities of Support Vector Machines for binary classification. Our model, denoted Margin Optimal Classification Tree (MARGOT), uses maximum-margin multivariate hyperplanes nested in a binary tree structure. To enhance the interpretability of our approach, we analyse two alternative versions of MARGOT, which include feature selection constraints inducing local sparsity of the hyperplanes. First, MARGOT is tested on non-linearly separable synthetic datasets in a 2-dimensional feature space to provide a graphical representation of the maximum-margin approach. The proposed models are then tested on benchmark datasets from the UCI repository. The MARGOT formulation turns out to be easier to solve than other OCT approaches, and the generated tree generalizes better to new observations. The two interpretable versions are effective in selecting the most relevant features while maintaining good prediction quality.
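To illustrate the nested maximum-margin structure (though not the exact MIQP, which optimizes all hyperplanes jointly), a greedy recursion over standard linear SVMs can serve as a sketch:

```python
# Illustrative greedy stand-in for nested max-margin splits; MARGOT itself
# solves one exact mixed-integer quadratic program over the whole tree.
# Assumes binary integer labels y in {0, 1}.
import numpy as np
from sklearn.svm import LinearSVC

class MarginTree:
    def __init__(self, depth=2, C=1.0):
        self.depth, self.C = depth, C
        self.svm, self.left, self.right, self.label = None, None, None, None

    def fit(self, X, y):
        if self.depth == 0 or len(np.unique(y)) < 2:
            self.label = int(np.bincount(y).argmax()) if len(y) else 0
            return self
        self.svm = LinearSVC(C=self.C).fit(X, y)        # max-margin hyperplane
        side = self.svm.decision_function(X) >= 0
        if side.all() or (~side).all():                 # degenerate split: stop
            self.svm, self.label = None, int(np.bincount(y).argmax())
            return self
        self.left = MarginTree(self.depth - 1, self.C).fit(X[~side], y[~side])
        self.right = MarginTree(self.depth - 1, self.C).fit(X[side], y[side])
        return self

    def predict(self, X):
        if self.svm is None:
            return np.full(len(X), self.label)
        side = self.svm.decision_function(X) >= 0
        out = np.empty(len(X), dtype=int)
        if (~side).any():
            out[~side] = self.left.predict(X[~side])
        if side.any():
            out[side] = self.right.predict(X[side])
        return out

# usage: MarginTree(depth=2).fit(X_train, y_train).predict(X_test)
```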
Hierarchical time series are common in several applied fields. Forecasts are required to be coherent, that is, to satisfy the constraints given by the hierarchy. The most popular technique to enforce coherence is called reconciliation, which adjusts the base forecasts computed for each time series. However, recent works on probabilistic reconciliation present several limitations. In this paper, we propose a new approach based on conditioning to reconcile any type of forecast distribution. We then introduce a new algorithm, called Bottom-Up Importance Sampling, to efficiently sample from the reconciled distribution. It can be used for any base forecast distribution: discrete, continuous, or in the form of samples, providing a major speedup compared to the current methods. Experiments on several temporal hierarchies show a significant improvement over base probabilistic forecasts.
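A minimal sketch of reconciliation-by-conditioning via importance sampling on a toy two-series hierarchy (total = b1 + b2) conveys the flavor of the approach; the base forecast choices below are illustrative, not taken from the paper:

```python
# Minimal importance-sampling reconciliation sketch: draw from the bottom base
# forecasts and reweight by how well their sum matches the upper base
# forecast. Loosely in the spirit of Bottom-Up Importance Sampling.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# base forecasts: bottom series are Poisson, the total is forecast as Gaussian
b1 = rng.poisson(5.0, n)
b2 = rng.poisson(8.0, n)
total_mu, total_sigma = 16.0, 2.0

# importance weights: upper forecast density evaluated at the bottom-up sum
s = b1 + b2
w = np.exp(-0.5 * ((s - total_mu) / total_sigma) ** 2)
w /= w.sum()

# resample to obtain draws from the reconciled joint bottom distribution
idx = rng.choice(n, size=n, replace=True, p=w)
b1_rec, b2_rec = b1[idx], b2[idx]
print("reconciled means:", b1_rec.mean(), b2_rec.mean(), (b1_rec + b2_rec).mean())
```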
Due to the existence of adversarial attacks, the use of neural networks in safety-critical systems requires safe and robust models. Knowing the minimal adversarial perturbation of any input x, or, equivalently, knowing the distance of x from the classification boundary, allows evaluating the classification robustness, thus providing certifiable predictions. Unfortunately, state-of-the-art techniques for computing such a distance are computationally expensive and hence not suited for online applications. This work proposes a novel family of classifiers, namely Signed Distance Classifiers (SDCs), which, from a theoretical perspective, directly output the exact distance of x from the classification boundary rather than a probability score (e.g., softmax). SDCs represent a family of robust-by-design classifiers. To practically address the theoretical requirements of an SDC, a novel network architecture named Unitary-Gradient Neural Network is presented. Experimental results show that the proposed architecture approximates a signed distance classifier, thus allowing an online certifiable classification of x at the cost of a single inference.
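A toy example shows how a signed-distance output enables certification at the cost of a single inference; for a linear classifier the signed distance is exact, which is the property an SDC aims to provide for deep models:

```python
# Toy certification with a signed-distance output: for a linear classifier
# the signed distance to the boundary is exact and cheap to compute.
import numpy as np

w, b = np.array([2.0, -1.0]), 0.5

def signed_distance(x):
    # distance of x from the hyperplane w.x + b = 0, signed by the class side
    return (w @ x + b) / np.linalg.norm(w)

def certified_predict(x, eps):
    d = signed_distance(x)
    label = int(d >= 0)
    # certified iff no perturbation of norm <= eps can cross the boundary
    return label, abs(d) > eps

x = np.array([1.0, 1.0])
print(certified_predict(x, eps=0.25))   # (1, True): robust within eps
```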
Federated learning is a popular strategy for training models on distributed, sensitive data while preserving data privacy. Prior work has identified a range of security threats to federated learning protocols that poison the data or the model. However, federated learning is a networked system in which the communication between clients and the server plays a critical role in the learning-task performance. We highlight how communication introduces another vulnerability surface in federated learning and study the impact of network-level adversaries on training federated learning models. We show that attackers who drop the network traffic from carefully selected clients can significantly decrease model accuracy on a target population. Moreover, we show that a coordinated poisoning campaign from a small number of clients can amplify the dropping attack. Finally, we develop a server-side defense that mitigates the impact of our attacks by identifying and up-sampling clients likely to contribute positively toward target accuracy. We comprehensively evaluate our attacks and defenses on three datasets, assuming encrypted communication channels and an attacker with partial visibility of the network.
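A hedged sketch of a defense of this flavor is shown below; scoring each client's update by its effect on a held-out target-domain validation set is an illustrative stand-in, not the authors' exact algorithm:

```python
# Hedged sketch of a server-side up-sampling defense: score each client
# update by the validation-loss reduction it would yield on the target
# population, then up-weight helpful clients in the aggregate. Illustrative.
import numpy as np

def aggregate(global_w, client_updates, val_loss_fn, temp=5.0):
    """client_updates: list of weight deltas; val_loss_fn(w) -> float."""
    base = val_loss_fn(global_w)
    # contribution score: loss reduction if only this client were applied
    scores = np.array([base - val_loss_fn(global_w + du) for du in client_updates])
    weights = np.exp(temp * scores)
    weights /= weights.sum()                  # up-sample helpful clients
    delta = sum(wi * du for wi, du in zip(weights, client_updates))
    return global_w + delta

# toy usage: scalar "model", loss is distance to the target optimum 3.0
val_loss = lambda w: abs(w - 3.0)
print(aggregate(0.0, [np.float64(2.5), np.float64(-1.0), np.float64(0.5)], val_loss))
```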
Music mixing traditionally involves recording instruments in the form of clean, individual tracks and blending them into a final mixture using audio effects and expert knowledge (e.g., a mixing engineer). In recent years, the automation of music production tasks has become an emerging field in which rule-based methods and machine learning approaches have been explored. Nevertheless, the lack of dry or clean instrument recordings limits the performance of such models, which remain far from professional human-made mixes. We explore whether out-of-domain data, such as wet or processed multitrack music recordings, can be repurposed to train supervised deep learning models that bridge the current gap in automatic mixing quality. To achieve this, we propose a novel data preprocessing method that allows the models to perform automatic music mixing. We also redesign a listening-test method for evaluating music mixing systems. We validate our results through such a listening test with highly experienced mixing engineers as participants.
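As an illustration of what such preprocessing might involve, the sketch below normalizes the loudness of each wet stem; this is a generic stand-in for effect normalization, not the paper's exact pipeline:

```python
# Generic stand-in for one plausible preprocessing step on wet stems:
# RMS-loudness normalization so processed recordings become more homogeneous
# before supervised training. Not the paper's exact method.
import numpy as np

def rms_normalize(track, target_db=-23.0, eps=1e-9):
    rms = np.sqrt(np.mean(track ** 2) + eps)
    gain = 10 ** (target_db / 20.0) / (rms + eps)
    return track * gain

stems = [np.random.randn(44100) * g for g in (0.01, 0.3, 1.2)]  # uneven levels
stems = [rms_normalize(s) for s in stems]
print([float(20 * np.log10(np.sqrt(np.mean(s ** 2)))) for s in stems])  # ~ -23 dB
```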
Factor graphs are graphical models used to represent a wide variety of problems in robotics, such as Structure from Motion (SfM), Simultaneous Localization and Mapping (SLAM), and calibration. Typically, at their core they contain an optimization problem whose terms depend only on a small subset of variables. Factor graph solvers exploit this locality to drastically reduce the computational time of iterative least-squares (ILS) methods. Although extremely powerful, their application is usually limited to unconstrained problems. In this paper, we model constraints over variables within factor graphs by introducing a factor graph version of the method of Lagrange multipliers. We show the potential of our method by presenting a full navigation stack based on factor graphs. Unlike standard navigation stacks, we can model both optimal control for local planning and localization with factor graphs, and solve the two problems using the standard ILS methodology. We validate our approach in real-world autonomous navigation scenarios, comparing it with the de facto standard navigation stack implemented in ROS. Comparative experiments show that, for the application at hand, our system outperforms the standard nonlinear programming solver Interior Point Optimizer (IPOPT) in runtime, while achieving similar solutions.
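A toy sketch of the underlying method-of-multipliers idea on a constrained least-squares problem is shown below; in a factor graph, the constraint term would simply enter as additional factors:

```python
# Toy method-of-multipliers sketch: minimize ||x - t||^2 subject to the
# equality constraint x0 + x1 - 1 = 0, alternating a closed-form
# augmented-Lagrangian minimization with a dual (multiplier) update.
import numpy as np

t = np.array([2.0, 3.0])
A = np.array([1.0, 1.0])          # constraint gradient (linear constraint)
x, lam, rho = t.copy(), 0.0, 10.0

for _ in range(50):
    # argmin_x ||x - t||^2 + lam * c(x) + (rho/2) * c(x)^2  (normal equations)
    H = 2 * np.eye(2) + rho * np.outer(A, A)
    g = 2 * t - lam * A + rho * A * 1.0    # rhs includes constraint offset b = 1
    x = np.linalg.solve(H, g)
    lam += rho * (A @ x - 1.0)             # multiplier (dual ascent) update
print(x, A @ x)                            # -> [0., 1.], sum = 1
```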